44 research outputs found

    A fuzzy c-means bi-sonar-based Metaheuristic Optimization Algorithm

    Fuzzy clustering is an important problem and the subject of active research in several real-world applications. The fuzzy c-means (FCM) algorithm is one of the most popular fuzzy clustering techniques because it is efficient, straightforward, and easy to implement. Fuzzy clustering methods allow objects to belong to several clusters simultaneously, with different degrees of membership. Objects on the boundaries between several classes are not forced to belong fully to one of the classes, but are instead assigned membership degrees between 0 and 1 indicating their partial membership. However, FCM is sensitive to initialization and is easily trapped in local optima. Bi-sonar optimization (BSO) is a stochastic global metaheuristic optimization tool and a relatively new algorithm. In this paper a hybrid fuzzy clustering method, FCB, based on FCM and BSO is proposed, which makes use of the merits of both algorithms. Experimental results show that the proposed method is efficient and yields encouraging results.
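The membership-update and centre-update steps of plain FCM can be sketched as follows. This is a minimal illustration with random initialisation only; the BSO hybridisation the abstract proposes is not shown, and the fuzzifier `m` and cluster count `c` take their usual textbook roles:

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means; illustrative only (no BSO hybridisation)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random initial memberships; each row sums to 1 across the c clusters
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                            # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]          # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))                             # inverse-distance weights
        U /= U.sum(axis=1, keepdims=True)                     # renormalise to [0, 1]
    return centers, U

# two well-separated point clouds
X = np.vstack([np.zeros((20, 2)), np.full((20, 2), 5.0)])
centers, U = fcm(X, c=2)
```

With a good initialisation this converges quickly; the sensitivity to bad initialisations is precisely what a global metaheuristic such as BSO is meant to address.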

    On Improving Ratio/Product Estimator by Ratio/Product-cum-Mean-per-Unit Estimator Targeting More Efficient Use of Auxiliary Information

    To achieve a more efficient use of auxiliary information, we propose single-parameter ratio/product-cum-mean-per-unit estimators for a finite population mean in simple random sampling without replacement when the magnitude of the correlation coefficient is not very high (less than or equal to 0.7). First-order large-sample approximations to the bias and the mean square error of the proposed estimators are obtained. We use simulation to compare our estimators with the well-known sample mean, ratio, and product estimators, as well as with the classical linear regression estimator, for efficient use of auxiliary information. The results conform to the motivating aim behind our proposal.
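For concreteness, the kind of simulation comparison described can be sketched with the textbook ratio estimator, ȳ·X̄/x̄, against the plain sample mean under SRSWOR. The population below is hypothetical and the estimator shown is the classical one, not the single-parameter family the abstract proposes:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical finite population with a positively correlated auxiliary variable x
N, n = 1000, 50
x = rng.uniform(10, 50, N)
y = 2.0 * x + rng.normal(0, 25, N)        # moderate positive correlation
Xbar, Ybar = x.mean(), y.mean()           # population means

reps = 5000
mean_est = np.empty(reps)
ratio_est = np.empty(reps)
for r in range(reps):
    idx = rng.choice(N, n, replace=False)                  # SRSWOR
    mean_est[r] = y[idx].mean()                            # usual unbiased estimator
    ratio_est[r] = y[idx].mean() * Xbar / x[idx].mean()    # classical ratio estimator

mse = lambda t: np.mean((t - Ybar) ** 2)
print(f"MSE(sample mean)     = {mse(mean_est):.2f}")
print(f"MSE(ratio estimator) = {mse(ratio_est):.2f}")
```

With the correlation positive and reasonably strong, the ratio estimator has a visibly smaller empirical MSE than the sample mean, which is the baseline behaviour the proposed estimators aim to improve upon at more moderate correlations.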

    Assessment of inhalation technique and predictors of incorrect performance among patients of chronic obstructive pulmonary disease and asthma

    Background: Poor inhalation technique is responsible for decreasing the efficacy of topical drug therapy in asthma and chronic obstructive pulmonary disease (COPD). Certain steps of the inhalation technique are erred most often and, if ascertained, can be rectified, leading to an overall improvement in the technique. The predictors of poor use can also be identified. Methods: Inhaler technique for pressurised metered dose inhalers (pMDI), pMDI with spacer, and dry powder inhalers (DPI) was assessed in one hundred and five patients who fulfilled the inclusion and exclusion criteria and were enrolled in this study. Inhaler technique was assessed using ERS/ISAM Task Force report-based scores, and lung function was assessed using a pulmonary function test (PFT). The technique was re-assessed and scored after a period of three months, along with reassessment of lung function by PFT. Results: The mean ERS/ISAM Task Force report-based score for evaluation of inhalation technique increased from 5.79±2.58 to 8.23±2.41 (p<0.0001) after intervention. The most commonly committed errors in the inhalation technique were in steps eight, ten, and four by patients using pMDI, pMDI with spacer, and DPI, respectively. With faulty technique as the dependent variable/outcome, 16% of the variation could be explained by the type of inhaler used (r2 = 0.1607), and this was statistically significant (p<0.0001); thus the type of inhaler used was a predictor of poor use. Conclusions: Inhaler technique improved with systematic training, and there was a trend towards improvement in lung function and hence the clinical condition.

    Cathodoluminescence Studies of Nanoindented CdZnTe Single Crystal Substrates for Analysis of Residual Stresses and Deformation Behaviour

    Nanoindentation-induced residual stresses were analysed on (111) Te-face CdZnTe single-crystal substrates in this study. CdZnTe substrates were subjected to nanoindentation using cube-corner indenter geometry with a peak load of 10 mN. Loading rates of 1 mN/s and 5 mN/s were used in the experiments, with a holding time of 10 s at peak load. Residual stresses in the indented region were analysed from load-displacement curves and explained using dislocation generation and elastic recovery mechanisms. Residual stresses were found to be compressive just at the indented surface. Slip lines along the slip directions of this material were clearly visible in FE-SEM images of the indents. Indents and the surrounding surfaces were characterized using the cathodoluminescence (CL) technique. CL mapping of the indented surface revealed dislocation generation and propagation behaviour just beneath the indenter as well as in the surrounding surfaces. The dislocations act as non-radiative recombination centres and quench the CL intensity locally; dark lines were therefore attributed to the presence of dislocations in the material. CL mapping analysis shows that both rosette glide and tetrahedral glide of dislocations are the primary deformation mechanisms present in CdZnTe, and a rosette structure was observed in the CL mapping. CL spectra at 300 K of undeformed CdZnTe show a peak at 810 nm wavelength, which corresponds to near-band-edge emission. After indentation, the CL spectra show peak intensities at 814 nm and 823 nm at the edges of the indents created with loading rates of 1 mN/s and 5 mN/s, respectively. These peak shifts from 810 nm were attributed to tensile residual stresses present in the indented material.

    Efficient Quadrature Using Bernstein’s Polynomial Weights via Fusion of Two Dual-Perspectives

    A new polynomial quadrature operator is proposed which uses the weight functions of the well-known Bernstein polynomial operator in an improved structure, achieved through a fusion of two dual perspectives. These weights are functions of the variable of the unknown function being approximated, and are not mere constants. The new quadrature formula is compared empirically with the quadrature using the well-known Bernstein operator. The percentage absolute relative errors for the proposed quadrature formula and for that using the Bernstein operator are computed for certain selected functions and with different numbers of node points in the interval of quadrature. The proposed quadrature formula is observed to produce significantly better results.
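The baseline being improved upon has a particularly simple form: since each Bernstein basis polynomial C(n, k) x^k (1-x)^(n-k) integrates to 1/(n+1) over [0, 1], integrating the Bernstein approximant of f collapses to an equally weighted average of f at the nodes k/n. The sketch below shows that baseline Bernstein-operator quadrature, not the proposed improved operator:

```python
import math

def bernstein_quadrature(f, n):
    """Integrate f over [0, 1] by integrating its Bernstein approximant B_n(f; x).

    Each basis polynomial C(n, k) x^k (1-x)^(n-k) integrates to 1/(n+1),
    so the rule reduces to an equally weighted average of f at the nodes k/n.
    """
    return sum(f(k / n) for k in range(n + 1)) / (n + 1)

# example: the integral of exp(x) on [0, 1] is e - 1
approx = bernstein_quadrature(math.exp, 400)
exact = math.e - 1
rel_err_pct = abs(approx - exact) / exact * 100
```

The error decays only at rate O(1/n), reflecting the slow convergence of the Bernstein operator itself, which is what motivates improved variants.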

    An Iterative Algorithm for Efficient Estimation of the Mean of a Normal Population Using Computational-Statistical Intelligence & Sample Counterpart of Rather-Very-Large Though Unknown Coefficient of Variation with a Small Sample

    This paper addresses the issue of finding the most efficient estimator of the normal population mean when the population coefficient of variation (CV) is rather very large though unknown, using a small sample (sample size ≤ 30). The paper proposes an efficient iterative estimation algorithm exploiting the sample CV for efficient estimation of the normal mean. The MSEs of the estimators under this strategy have very intricate algebraic expressions depending on the unknown values of the population parameters, and hence are not amenable to an analytical study determining the extent of the gain in their relative efficiencies with respect to the usual unbiased estimator (the sample mean, 'UUE'). Nevertheless, we examine these relative efficiencies of our estimators with respect to the usual unbiased estimator by means of an illustrative simulation study. MATLAB 7.7.0.471 (R2008b) is used to program this illustrative simulated empirical numerical study. DOI: 10.15181/csat.v4i1.1091
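The abstract does not reproduce the paper's algorithm, but the general idea of iteratively exploiting the sample CV can be illustrated with a generic shrinkage scheme: among estimators of the form c·x̄, the MSE is minimised at c = 1/(1 + CV²/n), and the unknown CV is replaced by its sample counterpart re-evaluated at each pass. Everything below is an illustrative assumption, not the authors' method:

```python
import numpy as np

def iterative_cv_mean(sample, n_iter=20):
    """Generic sketch only: iterate a minimum-MSE shrinkage multiple of the
    sample mean, with the unknown CV replaced by its sample counterpart.

    Among estimators of the form c * xbar, the MSE is minimised at
    c = 1 / (1 + CV**2 / n); here the CV is re-estimated around the current
    mean estimate on each pass.  This is NOT the algorithm of the paper.
    """
    x = np.asarray(sample, dtype=float)
    n = x.size
    xbar = x.mean()
    s = x.std(ddof=1)
    est = xbar
    for _ in range(n_iter):
        cv2 = (s / est) ** 2          # squared sample CV at the current estimate
        est = xbar / (1.0 + cv2 / n)  # shrink the sample mean
    return est

rng = np.random.default_rng(0)
sample = rng.normal(10.0, 8.0, 20)    # large CV (about 0.8), small sample
est = iterative_cv_mean(sample)
```

The shrinkage is strongest exactly in the regime the paper targets: a small sample combined with a rather large CV.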

    An Estimation Error Corrected Sharpe Ratio Using Bootstrap Resampling

    The Sharpe ratio is a common financial performance measure that represents the optimal risk versus return of an investment portfolio, also defined as the slope of the capital market line within the mean-variance Markowitz efficient frontier. Obtaining sample point and confidence interval estimates for this metric is challenging due to both its dynamic nature and issues surrounding its statistical properties. Given the importance of obtaining robust determinations of risk versus return within financial portfolios, the purpose of the current research was to reduce the statistical estimation error associated with Sharpe's ratio, offering an approach to point and confidence interval estimation which employs bootstrap resampling and computational intelligence. This work also extends prior studies by minimizing the ratio's statistical estimation error, first by incorporating the common assumption that the ratio's loss function is the squared error, and second by correcting for overestimation through an approach that recognizes that th
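A plain percentile-bootstrap version of the idea, without the paper's bias correction or computational-intelligence step, can be sketched as follows; the simulated return series is an assumption for illustration:

```python
import numpy as np

def sharpe(returns):
    """Ex-post Sharpe ratio of a series of excess returns."""
    r = np.asarray(returns, dtype=float)
    return r.mean() / r.std(ddof=1)

def bootstrap_sharpe_ci(returns, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the Sharpe ratio
    (plain resampling; no bias correction)."""
    rng = np.random.default_rng(seed)
    r = np.asarray(returns, dtype=float)
    stats = np.array([sharpe(rng.choice(r, size=r.size, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return sharpe(r), (lo, hi)

rng = np.random.default_rng(42)
monthly = rng.normal(0.01, 0.04, 120)   # simulated monthly excess returns
point, (lo, hi) = bootstrap_sharpe_ci(monthly)
```

Resampling with replacement ignores serial dependence in returns; a block bootstrap would be the natural refinement when that assumption fails.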

    Increased Statistical Efficiency in a Lognormal Mean Model

    Within the context of clinical and other scientific research, a substantial need exists for accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
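The style of efficiency comparison described can be illustrated with the standard lognormal-mean MLE, exp(μ̂ + σ̂²/2), against the raw arithmetic mean; the parameters below are illustrative assumptions, and this shows only the MLE baseline the abstract compares against, not the authors' CV-based estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.2                     # log-scale parameters (assumed for illustration)
true_mean = np.exp(mu + sigma ** 2 / 2)  # lognormal population mean

n, reps = 25, 20000
naive = np.empty(reps)
mle = np.empty(reps)
for r in range(reps):
    x = rng.lognormal(mu, sigma, n)
    naive[r] = x.mean()                                    # raw sample mean
    logs = np.log(x)
    mle[r] = np.exp(logs.mean() + logs.var(ddof=0) / 2)    # lognormal-mean MLE

mse = lambda t: np.mean((t - true_mean) ** 2)
rel_eff_pct = mse(naive) / mse(mle) * 100   # >100 means the MLE is more efficient
```

Exploiting the log-scale structure already beats the raw mean here; the paper's point is that further gains remain available by also using the sample coefficient of variation.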

    Bounds for the variance of an inverse binomial estimator

    Summary: Best [1] found the variance of the minimum variance unbiased estimator of the parameter p of the negative binomial distribution. Mikulski and Sm [2] gave an upper bound for it that is easier to calculate than Best's expression and is a good approximation for small values of p and large values of r (the number of successes). In this paper both lower bounds and closer upper bounds are derived.